
    Empirical Investigation of Factors influencing Function as a Service Performance in Different Cloud/Edge System Setups

    Experimental data can aid in gaining insights into a system's operation, as well as in determining critical aspects of a modelling or simulation process. In this paper, we analyse the data acquired from an extensive experimentation process on a serverless Function-as-a-Service (FaaS) system (based on the open-source Apache OpenWhisk) deployed across three available cloud/edge locations with different system setups. The resulting data can thus be used to model the distribution of functions through multi-location-aware scheduling mechanisms. The experiments include different traffic arrival rates, different setups for the FaaS system, and different configurations of the hardware and platform used. We analyse the acquired data for the three FaaS system setups and discuss their differences, presenting conclusions regarding transient effects of the system, such as the effect on wait and execution times. We also demonstrate trade-offs related to system setup and indicate a number of factors that can affect system performance and should be taken into consideration when modelling such systems. Comment: 24 pages, 14 figures, journal paper
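
    As a rough illustration of the kind of analysis described above, the sketch below aggregates per-invocation wait and execution times across setups and arrival rates. The CSV layout and column names (setup, arrival_rate, wait_ms, exec_ms) are hypothetical stand-ins for the actual experimental dataset, which the abstract does not specify.

```python
# Minimal sketch: summarize wait vs. execution time per FaaS setup and
# traffic arrival rate. File and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("openwhisk_experiments.csv")  # hypothetical file name

# Mean, spread and tail (95th percentile) per (setup, arrival rate) pair,
# the kind of view that exposes transient wait-time effects.
summary = (
    df.groupby(["setup", "arrival_rate"])[["wait_ms", "exec_ms"]]
      .agg(["mean", "std", lambda s: s.quantile(0.95)])
)
print(summary)
```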

    Challenges Emerging from Future Cloud Application Scenarios

    The cloud computing paradigm encompasses several key differentiating elements and technologies, tackling a number of inefficiencies, limitations and problems identified in the distributed and virtualized computing domain. Nonetheless, as is the case with all emerging technologies, its adoption has introduced new challenges and new complexities. In this paper we present key application areas and capabilities of future scenarios that are not tackled by current advancements, and we highlight specific requirements and goals for advancement in the cloud computing domain. We discuss these requirements and goals across different focus areas of cloud computing, ranging from cloud service and application integration, development environments and abstractions, to interoperability and related aspects such as legislation. The future application areas and their requirements are also mapped to the aforementioned focus areas in order to highlight their dependencies and their potential for moving cloud technologies forward and contributing towards their wider adoption.

    Quantifying cloud performance and dependability: Taxonomy, metric design, and emerging challenges

    In only a decade, cloud computing has grown from a pursuit of service-driven information and communication technology (ICT) into a significant fraction of the ICT market. Responding to the growth of the market, many alternative cloud services and their underlying systems are currently vying for the attention of cloud users and providers. To make informed choices between competing cloud service providers, to permit cost-benefit analysis of cloud-based systems, and to enable system DevOps to evaluate and tune the performance of these complex ecosystems, appropriate performance metrics, benchmarks, tools, and methodologies are necessary. This requires re-examining old system properties and considering new ones, possibly leading to the redesign of classic benchmarking metrics such as expressing performance as throughput and latency (response time). In this work, we address these requirements by focusing on four system properties: (i) elasticity of the cloud service, to accommodate large variations in the amount of service requested, (ii) performance isolation between the tenants of shared cloud systems and the resulting performance variability, (iii) availability of cloud services and systems, and (iv) the operational risk of running a production system in a cloud environment. Focusing on key metrics for each of these properties, we review the state of the art, then select or propose new metrics together with measurement approaches. We see the presented metrics as a foundation for upcoming, industry-standard cloud benchmarks.
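
    To make one of these properties concrete, the sketch below computes a simple elasticity-style measure: average under- and over-provisioning of a cloud service given demand and supply curves sampled at the same instants. The metric definition and the numbers are illustrative, not the ones proposed in the paper.

```python
# Minimal sketch of an elasticity-style metric: how far supplied
# resources lag behind (or overshoot) demanded resources over time.
def provisioning_accuracy(demand, supply):
    under = [max(d - s, 0) for d, s in zip(demand, supply)]  # unmet demand
    over = [max(s - d, 0) for d, s in zip(demand, supply)]   # wasted supply
    n = len(demand)
    return sum(under) / n, sum(over) / n

# Example: demand spikes faster than the autoscaler reacts.
demand = [2, 2, 6, 6, 6, 3, 3]
supply = [2, 2, 2, 4, 6, 6, 4]
u, o = provisioning_accuracy(demand, supply)
print(f"avg under-provisioning: {u:.2f}, avg over-provisioning: {o:.2f}")
```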

    Leveraging data-driven infrastructure management to facilitate AIOps for big data applications and operations

    As institutions increasingly shift to distributed and containerized application deployments on remote heterogeneous cloud/cluster infrastructures, the cost and difficulty of efficiently managing and maintaining data-intensive applications have risen. An emerging solution to this issue is Data-Driven Infrastructure Management (DDIM), where decisions regarding the management of resources are taken based on data aspects and operations, at both the infrastructure and application levels. This chapter introduces readers to the core concepts underpinning DDIM, based on experience gained from the development of the Kubernetes-based BigDataStack DDIM platform (https://bigdatastack.eu/). The chapter covers multiple important Big Data Value (BDV) topics, including development, deployment, and operations for cluster/cloud-based big data applications, as well as data-driven analytics and artificial intelligence for smart automated infrastructure self-management. Readers will gain important insights into how next-generation DDIM platforms function, as well as how they can be used in practical deployments to improve quality of service for big data applications. This chapter relates to the technical priority Data Processing Architectures of the European Big Data Value Strategic Research & Innovation Agenda [33], as well as to the Data Processing Architectures horizontal concern and the Engineering and DevOps for Building Big Data Value vertical concern. The chapter also relates to the Reasoning and Decision Making cross-sectorial technology enablers of the AI, Data and Robotics Strategic Research, Innovation & Deployment Agenda [34].
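
    As a hedged sketch of what a data-driven management decision can look like in a Kubernetes setting, the snippet below scales a deployment based on an observed latency metric. The threshold policy, deployment name and metric value are illustrative assumptions, not BigDataStack's actual logic; the API calls are from the official Kubernetes Python client.

```python
# Minimal sketch: threshold-based, data-driven scaling of a deployment.
from kubernetes import client, config

def decide_replicas(p95_latency_ms: float, current: int,
                    slo_ms: float = 200.0) -> int:
    """Scale out when the SLO is breached, scale in with ample headroom."""
    if p95_latency_ms > slo_ms:
        return current + 1
    if p95_latency_ms < 0.5 * slo_ms and current > 1:
        return current - 1
    return current

config.load_kube_config()
apps = client.AppsV1Api()
replicas = decide_replicas(p95_latency_ms=250.0, current=3)  # illustrative values
apps.patch_namespaced_deployment_scale(
    "analytics-worker", "default", {"spec": {"replicas": replicas}}
)
```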

    Software modernization and cloudification using the ARTIST migration methodology and framework

    Cloud computing has enabled new software development and provisioning approaches by changing the way computing, storage and networking resources are purchased and consumed. The variety of cloud offerings, at both the technical and the business level, has considerably advanced the development process and established new business models and value chains for applications and services. However, the modernization and cloudification of legacy software, so that it can be offered as a service, still encounters many challenges. In this work, we present a complete methodology and a methodology instantiation framework for the effective migration of legacy software to modern cloud environments.

    Techniques and mechanisms for modeling and predicting performance in service oriented applications and infrastructures

    286 pages. The advent of service-oriented infrastructures providing software and hardware as a service has made it feasible for external users to consume these resources on a pay-per-use basis. In order to ensure the demanded quality of service during the provisioning of these resources, a multilayer translation and modeling mechanism is necessary to convert the application-level terms offered in traditional service level agreement contracts into resource-level attributes. During this process, specific requirements must be met and limitations overcome, originating mainly from the existence of different entities (Software-as-a-Service, Platform-as-a-Service and Infrastructure-as-a-Service providers) along the value chain of service-oriented infrastructures and architectures. In this thesis, different candidate methodologies are analyzed with regard to their ability to dynamically create application models in service-oriented infrastructures, and the fittest one is selected based on the aforementioned limitations and requirements. Its weaknesses are also investigated, and innovative approaches are applied in order to render it applicable to the desired framework. The final form of this method (genetically optimized artificial neural networks) is validated on a variety of real-world applications for predicting their performance based on the hardware of execution, their high-level parameters and their configuration. What is more, a two-level mechanism (workload forecasting and translation) extends its use to cases where the application workload cannot be foreseen by the owner of the service. Through the proposed framework, historical data are analyzed and patterns of usage are discovered that aid in the full automation of the management framework. Furthermore, a multi-layer, decoupled and service-oriented framework, based on specialized numerical software, is designed and implemented for incorporating the selected method in cloud computing environments. Based on this design, the development and evolution of modeling methods is faster and easier, and, due to the decoupling of the layers, technologies can be interchanged in all the involved layers without affecting the rest of the framework, in order to exploit advances in the respective technological fields. Through a detailed analysis of the framework's behaviour, the main performance bottlenecks are identified and alternative designs are created that can be interchanged at runtime in order to minimize the resource needs of the prediction service. In addition, the thesis analyzes the performance interference between concurrently running virtual resources and applications in a multi-tenant service-oriented execution framework such as clouds. A limited number of benchmarks that depict characteristic usage of the hardware resources is chosen, and the degradation of their performance due to coexistence on the same physical host is measured. Parameters taken into consideration in this process also include real-time scheduling configuration, necessary for guaranteeing a process's time on the CPU, along with assignment patterns on multicore architectures. Based on these experimental data, suitable prediction models are created, through which the IaaS provider may have a priori knowledge of the overhead introduced by the execution of a specific combination of tasks on a physical host. With this knowledge, it can choose the optimal combinations for the smooth operation of the infrastructure and for guaranteeing the quality of service offered by the resources.
    Γεώργιος Τ. Κουσιουρή
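
    The central idea of the thesis, genetically optimizing a neural network that maps configuration to performance, can be sketched in a few lines. The toy dataset, genome encoding and GA settings below are illustrative assumptions, not the thesis's actual setup.

```python
# Toy sketch: a genetic algorithm searching over neural-network
# hyperparameters (hidden units, learning rate) for a performance model.
import random
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))  # e.g. CPU share, input size, co-tenant count
y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.05, 200)  # synthetic "performance"

def fitness(genome):
    hidden, lr = genome
    model = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                         max_iter=500, random_state=0)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

def mutate(genome):
    hidden, lr = genome
    return (max(2, hidden + random.randint(-4, 4)),
            min(0.1, max(1e-4, lr * random.uniform(0.5, 2.0))))

population = [(random.randint(4, 64), 10 ** random.uniform(-4, -1))
              for _ in range(8)]
for _ in range(5):  # a handful of generations is enough for a demo
    survivors = sorted(population, key=fitness, reverse=True)[:4]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(4)]

print("best genome (hidden units, learning rate):", max(population, key=fitness))
```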

    The effects of scheduling, workload type and consolidation scenarios on virtual machine performance and their prediction through optimized artificial neural networks

    The aim of this paper is to study and predict the effect of a number of critical parameters on the performance of virtual machines (VMs). These parameters include allocation percentages, real-time scheduling decisions and co-placement of VMs when these are deployed concurrently on the same physical node, as dictated by the server consolidation trend and recent advances in cloud computing systems. Different combinations of VM workload types are investigated in relation to the aforementioned factors in order to find the optimal allocation strategies. What is more, different levels of memory sharing are applied, based on the coupling of VMs to cores on a multi-core architecture. For all the aforementioned cases, the effect on the scores of specific benchmarks running inside the VMs is measured. Finally, a black-box method based on genetically optimized artificial neural networks is introduced in order to investigate the ability to predict degradation a priori, before execution, and is compared to the linear regression method.
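
    The paper's closing comparison, a black-box neural predictor versus linear regression for consolidation-induced degradation, can be mimicked on synthetic data as below. The features (CPU allocation, co-located VM count, cache-sharing flag) and the interaction they encode are illustrative assumptions.

```python
# Minimal sketch: neural network vs. linear regression for predicting
# benchmark degradation under consolidation; data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(10, 100, 500),  # CPU allocation percentage
    rng.integers(0, 4, 500),    # number of co-located VMs
    rng.integers(0, 2, 500),    # VMs pinned to cores sharing a cache?
])
# Non-linear interaction: co-location hurts more at low CPU allocations.
y = 100 - 0.5 * X[:, 0] + 8 * X[:, 1] * (X[:, 2] + 1) * (100 / X[:, 0])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = [
    ("linear regression", LinearRegression()),
    ("neural network", make_pipeline(StandardScaler(),
                                     MLPRegressor((32,), max_iter=2000,
                                                  random_state=0))),
]
for name, model in models:
    model.fit(X_tr, y_tr)
    print(name, "held-out R^2:", round(model.score(X_te, y_te), 3))
```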

    Virtualised e-Learning on the IRMOS real-time cloud

    Providing proper timeliness guarantees to distributed soft real-time applications in a virtualised infrastructure involves the careful use of various techniques at different levels, ranging from real-time scheduling mechanisms at the virtual-machine hypervisor level and QoS-aware protocols at the network level, to proper design methodologies and tools for stochastic modelling and runtime provisioning of the applications. This paper describes the way these techniques were combined to provide strong quality of service guarantees to interactive soft real-time applications in the cloud computing infrastructure developed in the context of the IRMOS European Project. The efficiency of the developed infrastructure is demonstrated by two real interactive e-Learning applications, an e-Learning mobile content delivery application and a virtual world e-Learning application, both of which have been integrated into the IRMOS platform.
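
    One ingredient mentioned above, real-time CPU scheduling for latency-sensitive processes, can be illustrated with the generic POSIX API on Linux. This is not IRMOS's hypervisor-level scheduler; it is a minimal host-level sketch, and it requires root privileges (or CAP_SYS_NICE).

```python
# Minimal sketch: move a process into the SCHED_FIFO real-time class
# (Linux only; requires sufficient privileges).
import os

def make_realtime(pid: int = 0, priority: int = 50) -> None:
    """Switch `pid` (0 = calling process) to SCHED_FIFO at `priority`."""
    prio = min(priority, os.sched_get_priority_max(os.SCHED_FIFO))
    os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(prio))

if __name__ == "__main__":
    make_realtime()
    print("running under SCHED_FIFO:",
          os.sched_getscheduler(0) == os.SCHED_FIFO)
```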

    Distributed Interactive Real-time Multimedia Applications: A Sampling and Analysis Framework

    The advancements in distributed computing have driven the emergence of service-based infrastructures that allow for on-demand provision of IT assets. However, the complexity of characterizing an application's behavior, and as a result the potentially offered level of Quality of Service (QoS), introduces a number of challenges in the data collection and analysis process on the service provider's side, especially for real-time applications. This complexity is increased by additional factors that influence the application's behavior, such as real-time scheduling decisions, the percentage of a node assigned to the application, or the application-generated workload. In this paper, we present a framework, developed under the IRMOS EU-funded project, that enables the sampling and gathering of the dataset necessary to analyze an application's behavior. The resulting dataset is then processed in order to extract useful conclusions regarding the effect of CPU allocation and scheduling decisions on QoS. We demonstrate the operation of the proposed framework and evaluate its performance and effectiveness using an interactive real-time multimedia application, namely a web-based eLearning scenario.
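
    The sampling loop such a framework automates can be sketched as follows: sweep a controllable factor (here, the CPU share granted to the application) and record the observed QoS metric for later analysis. The run_benchmark function is a hypothetical placeholder for driving the real application.

```python
# Minimal sketch: sample application QoS across CPU allocation levels.
import csv
import random

def run_benchmark(cpu_share: float) -> float:
    """Hypothetical stand-in: returns a mean response time in ms."""
    return 50 / cpu_share + random.gauss(0, 2)

with open("samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["cpu_share", "response_ms"])
    for share in (0.2, 0.4, 0.6, 0.8, 1.0):
        for _ in range(10):  # repeated trials per operating point
            writer.writerow([share, run_benchmark(share)])
```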

    Enterprise applications cloud rightsizing through a joint benchmarking and optimization approach

    Migrating an application to the cloud requires consideration of non-functional properties such as cost, performance and Quality of Service (QoS). Given the plethora of cloud offerings and consumption-based pricing models currently available in the cloud market, it is extremely complex to find the optimal deployment that fits the application requirements and provides the best QoS and cost trade-offs. In many cases the performance of these service offerings may vary depending on congestion levels, provider policies and how the applications intended to be executed upon them use the computing resources. A key challenge for customers before moving to the cloud is to know how their application will behave on cloud platforms, in order to select the environment best suited to host their application components in terms of performance and cost. In this paper, we propose a combined methodology and a set of tools that support the design and migration of enterprise applications to the cloud. Our tool chain includes: (i) the performance assessment of cloud services based on cloud benchmark results, (ii) a profiler/classifier mechanism that identifies the computing footprint of an arbitrary application and provides the best match with a cloud service in terms of performance and cost, and (iii) a design space exploration tool, which is effective in identifying the deployment of minimum cost while taking into account workload changes and providing QoS guarantees.
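
    The design space exploration step, in its simplest form, reduces to picking the cheapest offering whose benchmarked performance meets the application's QoS target. The catalog below uses made-up offerings and numbers purely for illustration.

```python
# Minimal sketch: cost-optimal rightsizing under a performance constraint.
from dataclasses import dataclass

@dataclass
class Offering:
    name: str
    hourly_cost: float
    benchmark_ops_per_sec: float  # from the benchmarking step

def rightsize(offerings, required_ops_per_sec):
    feasible = [o for o in offerings
                if o.benchmark_ops_per_sec >= required_ops_per_sec]
    return min(feasible, key=lambda o: o.hourly_cost) if feasible else None

catalog = [
    Offering("small", 0.05, 800),
    Offering("medium", 0.10, 1700),
    Offering("large", 0.20, 3500),
]
print(rightsize(catalog, required_ops_per_sec=1500))  # -> the "medium" offering
```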